🐛 set RecheckTime when minSuccessTime is defined #163
Conversation
/assign @qiujian16
Signed-off-by: Qing Hao <[email protected]>
Actionable comments posted: 0
🧹 Nitpick comments (1)
pkg/apis/cluster/v1alpha1/rollout_test.go (1)
2474-2548: Consider adding a test case for past RecheckTimes.

The test coverage is good, but consider adding a test case where all `RecheckTime` values are in the past (before `fakeTime`). According to the implementation in `rollout.go` (line 614), only positive durations are considered, so this scenario should return `nil`. This would ensure the function correctly handles clusters that have already passed their recheck time. Example test case to add:

```go
{
	name: "All RecheckTimes in the past, should return nil",
	rolloutClusters: []ClusterRolloutStatus{
		{ClusterName: "cluster1", Status: Succeeded, RecheckTime: &fakeTime_60s},
		{ClusterName: "cluster2", Status: Progressing, RecheckTime: &fakeTime_30s},
	},
	expectedRecheckAfter: nil,
},
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- pkg/apis/cluster/v1alpha1/rollout.go (8 hunks)
- pkg/apis/cluster/v1alpha1/rollout_test.go (26 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
pkg/apis/cluster/v1alpha1/rollout_test.go (1)
pkg/apis/cluster/v1alpha1/rollout.go (8)
`Failed` (33-33), `Progressing` (29-29), `ToApply` (27-27), `ClusterRolloutStatus` (42-57), `TimeOut` (36-36), `Succeeded` (31-31), `RolloutResult` (62-73), `RolloutClock` (19-19)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: verify
- GitHub Check: integration
- GitHub Check: unit
🔇 Additional comments (10)
pkg/apis/cluster/v1alpha1/rollout_test.go (4)
145-155: LGTM!

The test expectations correctly reflect the new `RecheckTime` semantics:

- Progressing/Failed clusters have `RecheckTime` set to when their timeout will occur
- TimeOut clusters have `RecheckTime` set to when they timed out
- `RecheckAfter` is correctly derived as the minimum recheck time
504-563: LGTM!

The test correctly validates the interaction between `minSuccessTime` (soak period) and timeout behavior:

- cluster1 is in its soak period with `RecheckTime` set to when the soak ends
- cluster3 is progressing with `RecheckTime` set to when the timeout occurs
- cluster2 has timed out with `RecheckTime` recording when the timeout occurred
- `RecheckAfter` correctly derives the minimum recheck time (30s)
564-630: Excellent test coverage for mixed timeout and minSuccessTime scenarios.

This test case comprehensively validates the interaction between:

- Clusters in their soak period (cluster1)
- Clusters progressing with different remaining timeout durations (cluster2, cluster3)
- Clusters that have completed their soak period (cluster4, correctly excluded)

The inline comments clearly explain each cluster's expected state, and the `RecheckAfter` calculation is correct (minimum of 30s from cluster1 and cluster2).
1639-1697: Good coverage for progressivePerGroup with minSuccessTime.

This test validates the correct handling of soak periods in progressivePerGroup rollouts:

- cluster1 in its soak period is correctly included with `RecheckTime`
- cluster2 outside its soak period is correctly excluded
- cluster3 progressing has a timeout-based `RecheckTime`
- `RecheckAfter` correctly derives the minimum (30s)

The inline comments effectively document the test expectations.
pkg/apis/cluster/v1alpha1/rollout.go (6)
53-56: LGTM!

The `RecheckTime` field documentation clearly describes its dual purpose for both soak period tracking (succeeded status) and timeout tracking (progressing/failed status), which aligns with the implementation.
500-515: LGTM!

The `calculateRecheckTime` helper function correctly handles all cases:

- Returns `nil` when the duration is `maxTimeDuration` (no timeout/soak period configured)
- Uses the current time when `startTime` is `nil`
- Adds the duration to `startTime` when provided

This is a clean, general-purpose replacement for the previous `getTimeOutTime` function.
461-498: LGTM!

The `determineRolloutStatus` function correctly implements `RecheckTime` tracking:

Succeeded status (lines 472-480):
- Calculates when the soak period ends using `calculateRecheckTime`
- Sets `RecheckTime` when the cluster is within its soak period
- Allows the cluster to be rechecked when the soak completes

Progressing/Failed status (lines 486-494):
- Calculates when the timeout occurs
- Sets `RecheckTime` before status determination
- For timed-out clusters, `RecheckTime` records when the timeout occurred
- For progressing clusters, `RecheckTime` indicates when to check for the timeout

The implementation correctly supports the dual semantics of `RecheckTime` for both soak periods and timeouts.
606-623: LGTM!

The refactored `minRecheckAfter` function is cleaner and more maintainable:

Key improvements:
- Simplified signature by removing the `minSuccessTime` parameter (the information is now encoded in `RecheckTime`)
- Derives the minimum recheck time directly from the `RecheckTime` fields
- Correctly filters out past recheck times (line 614: `recheckDuration > 0`)
- Returns `nil` when no future recheck times exist

Semantics:
- Works for both timeout tracking (Progressing/Failed) and soak period tracking (Succeeded)
- Handles mixed scenarios correctly by finding the minimum across all cluster states

This aligns well with the per-cluster `RecheckTime` approach.
311-366: LGTM!

All call sites of `minRecheckAfter` in `progressivePerCluster` are correctly updated to pass only `rolloutClusters`, consistent with the new signature. The `RecheckAfter` field in the returned `RolloutResult` is properly populated in all code paths.
428-446: LGTM!

All call sites of `minRecheckAfter` in `progressivePerGroup` are correctly updated to pass only `rolloutClusters`, consistent with the new signature. The `RecheckAfter` field is properly populated in both return paths.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: haoqing0110, qiujian16. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Merged commit a185f88 into open-cluster-management-io:main
Summary
Related issue(s)
Fixes open-cluster-management-io/ocm#1237
Summary by CodeRabbit
Refactor
Tests